7 research outputs found

    Epidemic Control on a Large-Scale Agent-Based Epidemiology Model using Deep Deterministic Policy Gradient

    To mitigate the impact of the pandemic, governments have adopted several measures, including lockdowns, rapid vaccination programs, school closures, and economic stimulus. These interventions can have positive or unintended negative consequences. Current research on modeling and automatically determining optimal interventions through round-tripping is limited by simulation objectives, scale (a few thousand individuals), model types that are not suited for intervention studies, and the range of intervention strategies that can be explored (discrete vs. continuous). We address these challenges using a Deep Deterministic Policy Gradient (DDPG) based policy optimization framework on a large-scale (100,000 individuals) epidemiological agent-based simulation, over which we perform multi-objective optimization. We determine the optimal lockdown and vaccination policy in a minimalist age-stratified, multi-vaccine scenario with a basic simulation of economic activity. The results show that no lockdown combined with vaccination of the mid-age and elderly groups yields the best economic outcome (fewest individuals below the poverty line) while balancing the health objectives (infections and hospitalizations). A more in-depth simulation is needed to further validate our results, and we open-source our framework.
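
    As a hedged illustration of the setup described above, the sketch below wraps a (stubbed) agent-based simulator in a Gymnasium-style environment with a continuous action vector (lockdown stringency plus per-age-group vaccination rates) and a scalarized multi-objective reward over infections, hospitalizations, and poverty. The environment class, the reward weights, and the toy transition dynamics are all illustrative assumptions, not the paper's implementation; a standard DDPG learner could then be trained against such an interface.

```python
import numpy as np
import gymnasium as gym
from gymnasium import spaces

class EpidemicInterventionEnv(gym.Env):
    """Continuous interventions: [lockdown stringency, vaccination rate per age group]."""

    def __init__(self, horizon=180):
        self.horizon = horizon
        # Action: lockdown level in [0, 1] plus vaccine allocation for 3 age strata.
        self.action_space = spaces.Box(low=0.0, high=1.0, shape=(4,), dtype=np.float32)
        # Observation: fractions [infected, hospitalized, vaccinated, below_poverty].
        self.observation_space = spaces.Box(low=0.0, high=1.0, shape=(4,), dtype=np.float32)

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self.t = 0
        self.state = np.array([0.01, 0.0, 0.0, 0.10], dtype=np.float32)  # seeded outbreak
        return self.state, {}

    def step(self, action):
        self.state = self._step_abm(self.state, np.asarray(action, dtype=np.float32))
        infected, hospitalized, _, below_poverty = self.state
        # Multi-objective scalarization; the weights are illustrative only.
        reward = -(1.0 * infected + 2.0 * hospitalized + 1.5 * below_poverty)
        self.t += 1
        return self.state, float(reward), self.t >= self.horizon, False, {}

    def _step_abm(self, state, action):
        # Placeholder dynamics standing in for the 100,000-agent simulator;
        # a real backend would advance the agent-based model by one day here.
        infected, _hosp, vax, poverty = state
        lockdown, vax_rates = action[0], action[1:]
        new_inf = infected * 1.5 * (1.0 - 0.8 * lockdown) * (1.0 - vax)
        return np.clip(
            [new_inf, 0.1 * new_inf, vax + 0.01 * float(vax_rates.mean()),
             poverty + 0.02 * lockdown],
            0.0, 1.0).astype(np.float32)

# Training with an off-the-shelf DDPG learner (if stable-baselines3 is installed):
#   from stable_baselines3 import DDPG
#   model = DDPG("MlpPolicy", EpidemicInterventionEnv()).learn(total_timesteps=100_000)
```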

    Indian Legal NLP Benchmarks: A Survey

    The availability of challenging benchmarks is key to the advancement of AI in a specific field. Since legal text differs significantly from ordinary English text, there is a need to create separate Natural Language Processing benchmarks for Indian legal text that are challenging and focus on tasks specific to legal systems. This will spur innovation in applications of Natural Language Processing for Indian legal text and will benefit the AI community and the legal fraternity. We review the existing work in this area and propose ideas for creating new benchmarks for Indian legal Natural Language Processing.

    Electronic health records (EHR) quality control and temporal data analysis for clinical decision support

    US healthcare is undergoing major reforms towards evidence-based and precision medicine, with an emphasis on data-driven models. There is a strong impetus to improve the quality of healthcare while decreasing the cost incurred. The objective of this research is to develop methodologies for clinical decision support targeting acute and chronic care using EHR data. We address current data-mining challenges such as 1) missing data, 2) the sequential nature of records in the ICU, and 3) the integration of heterogeneous data for analysis. In this thesis, we develop novel strategies to solve these issues and contribute to the field of computer-aided diagnosis, with the following three specific aims: 1) to improve predictive performance by developing data imputation techniques for missing data in EHRs, 2) to develop predictive models and personalized temporal risk profiling for temporal EHR data, and 3) to integrate EHR data using deep learning based predictive models at multiple temporal resolutions and modalities. The first aim focuses on data issues, solving challenges such as missing data: we divide the missing data into multiple types on the basis of the statistical properties of the data and develop novel methodologies to impute each type. In the second aim, we perform temporal analysis of quality-controlled data and compare it with conventional non-temporal analysis; we also integrate survival analysis into temporal predictive modeling to visualize personalized patient risk profiles across time. In the third aim, we develop deep architectures for data integration across multiple temporal scales and modalities, along with techniques for interpreting the features and results of deep models. We demonstrate our results using data from intensive care units and Alzheimer’s disease populations, for end-points such as the prediction of ICU readmission, mortality, length of stay, and Alzheimer’s stage detection.
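
    As a minimal sketch of the first aim's idea of typing missing data and imputing each type differently, the pandas snippet below forward-fills sequential ICU vitals within a stay and falls back to cohort medians for sparse labs. The column names and the specific per-type rules are hypothetical illustrations, not the thesis's actual methods.

```python
# Hedged sketch: type-aware imputation for an EHR table. The split into
# "sequential vitals" vs. "sparse labs", and all column names, are assumptions.
import pandas as pd

def impute_ehr(df: pd.DataFrame,
               sequential_cols=("heart_rate", "resp_rate", "spo2"),
               sparse_cols=("lactate", "creatinine")) -> pd.DataFrame:
    df = df.sort_values(["stay_id", "charttime"]).copy()
    # Sequential signals: a gap usually means "not re-measured yet", so carry
    # the last observation forward within each ICU stay, then back-fill the
    # leading gap at admission.
    for col in sequential_cols:
        df[col] = df.groupby("stay_id")[col].transform(lambda s: s.ffill().bfill())
    # Sparse labs: measured irregularly; fall back to the cohort median.
    for col in sparse_cols:
        df[col] = df[col].fillna(df[col].median())
    return df
```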

    Multimodal deep learning models for early detection of Alzheimer’s disease stage

    Most current Alzheimer’s disease (AD) and mild cognitive impairment (MCI) studies use a single data modality to make predictions, such as AD stage. Fusing multiple data modalities can provide a holistic view for AD staging analysis. We therefore use deep learning (DL) to integrally analyze imaging (magnetic resonance imaging (MRI)), genetic (single nucleotide polymorphisms (SNPs)), and clinical test data to classify patients into AD, MCI, and controls (CN). We use stacked denoising auto-encoders to extract features from clinical and genetic data, and 3D convolutional neural networks (CNNs) for imaging data. We also develop a novel data interpretation method that identifies the top-performing features learned by the deep models via clustering and perturbation analysis. Using the Alzheimer’s Disease Neuroimaging Initiative (ADNI) dataset, we demonstrate that deep models outperform shallow models, including support vector machines, decision trees, random forests, and k-nearest neighbors. In addition, we demonstrate that integrating multimodal data outperforms single-modality models in terms of accuracy, precision, recall, and mean F1 scores. Our models identified the hippocampus and amygdala brain areas and the Rey Auditory Verbal Learning Test (RAVLT) as top distinguishing features, consistent with the known AD literature.
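
    A minimal PyTorch sketch of the fusion architecture described above: denoising auto-encoders encode the tabular modalities (clinical, SNP), a small 3D CNN encodes MRI volumes, and the concatenated codes feed a 3-way AD/MCI/CN classifier. Layer widths, input shapes, and the noise level are illustrative assumptions rather than the paper's exact configuration.

```python
import torch
import torch.nn as nn

class DenoisingEncoder(nn.Module):
    """Encoder of a denoising auto-encoder for tabular features."""
    def __init__(self, in_dim, code_dim=64, noise_std=0.1):
        super().__init__()
        self.noise_std = noise_std
        self.encode = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(),
                                    nn.Linear(128, code_dim), nn.ReLU())
        self.decode = nn.Linear(code_dim, in_dim)  # reconstruction head for pre-training

    def forward(self, x):
        if self.training:
            x = x + self.noise_std * torch.randn_like(x)  # corrupt the input
        return self.encode(x)

class MRIConv3D(nn.Module):
    """Small 3D CNN producing a fixed-size code from an MRI volume."""
    def __init__(self, code_dim=64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(4))
        self.proj = nn.Linear(16 * 4 ** 3, code_dim)

    def forward(self, vol):  # vol: (batch, 1, depth, height, width)
        return self.proj(self.features(vol).flatten(1))

class MultimodalADClassifier(nn.Module):
    """Concatenates per-modality codes and classifies AD / MCI / CN."""
    def __init__(self, clin_dim=30, snp_dim=500, n_classes=3):
        super().__init__()
        self.clin = DenoisingEncoder(clin_dim)
        self.snp = DenoisingEncoder(snp_dim)
        self.mri = MRIConv3D()
        self.head = nn.Linear(64 * 3, n_classes)

    def forward(self, clin, snp, mri):
        codes = torch.cat([self.clin(clin), self.snp(snp), self.mri(mri)], dim=1)
        return self.head(codes)
```

    Denoising auto-encoders are typically pre-trained on reconstruction before joint fine-tuning; the `decode` head above is included for that stage, though the pre-training loop itself is omitted here.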

    A Pilot Biomedical Engineering Course in Rapid Prototyping for Mobile Health
